
    Network-analysis-guided synthesis of weisaconitine D and liljestrandinine.

    General strategies for the chemical synthesis of organic compounds, especially of architecturally complex natural products, are not easily identified. Here we present a method to establish a strategy for such syntheses, which uses network analysis. This approach has led to the identification of a versatile synthetic intermediate that facilitated syntheses of the diterpenoid alkaloids weisaconitine D and liljestrandinine, and the core of gomandonine. We also developed a web-based graphing program that allows network analysis to be easily performed on molecules with complex frameworks. The diterpenoid alkaloids comprise some of the most architecturally complex and functional-group-dense secondary metabolites isolated. Consequently, they present a substantial challenge for chemical synthesis. The synthesis approach described here is a notable departure from other single-target-focused strategies adopted for the syntheses of related structures. Specifically, it affords not only the targeted natural products, but also intermediates and derivatives in the three families of diterpenoid alkaloids (C-18, C-19 and C-20), and so provides a unified synthetic strategy for these natural products. This work validates the utility of network analysis as a starting point for identifying strategies for the syntheses of architecturally complex secondary metabolites.
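    The paper's own ring-ranking metric is not reproduced here; as a rough illustration of what "network analysis" of a molecular framework can look like, the sketch below treats a hypothetical bicyclo[2.2.2]octane-like carbon skeleton as a graph (atom numbering invented for this example) and ranks atoms by betweenness centrality, a standard measure that peaks at the bridgehead carbons such analyses tend to flag as strategically important.

    ```python
    from collections import deque

    def betweenness(adj):
        """Brandes' algorithm for unweighted vertex betweenness centrality."""
        bc = {v: 0.0 for v in adj}
        for s in adj:
            stack, preds = [], {v: [] for v in adj}
            sigma = {v: 0 for v in adj}; sigma[s] = 1
            dist = {v: -1 for v in adj}; dist[s] = 0
            q = deque([s])
            while q:
                v = q.popleft()
                stack.append(v)
                for w in adj[v]:
                    if dist[w] < 0:                 # first visit: set distance
                        dist[w] = dist[v] + 1
                        q.append(w)
                    if dist[w] == dist[v] + 1:      # v lies on a shortest path to w
                        sigma[w] += sigma[v]
                        preds[w].append(v)
            delta = {v: 0.0 for v in adj}
            while stack:                            # back-propagate dependencies
                w = stack.pop()
                for v in preds[w]:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        return bc

    # Toy bicyclo[2.2.2]octane-like skeleton: bridgeheads 0 and 5 joined by
    # three two-carbon bridges (1-2, 3-4, 6-7).
    adj = {0: [1, 3, 6], 1: [0, 2], 2: [1, 5], 3: [0, 4],
           4: [3, 5], 5: [2, 4, 7], 6: [0, 7], 7: [6, 5]}
    bc = betweenness(adj)
    bridgeheads = sorted(bc, key=bc.get, reverse=True)[:2]
    print(bridgeheads)  # the two bridgehead carbons score highest
    ```

    On this symmetric skeleton, every cross-bridge shortest path funnels through a bridgehead, which is why the two degree-3 atoms dominate the ranking.
    
    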

    Minimizing Technological Oversights: A Marketing Research Perspective

    Technological advances provide vast opportunities for new product development. Some technologies are transformed into successful new products; others are not. In this paper we investigate the role that marketing research methods, as currently conceived, can play in aligning marketplace needs with technological potential. We discuss the types of opportunities that new technologies present to the marketplace and why the existing set of market research methods is insufficient to assess the potential of all of these new technologies. We then discuss some emerging, non-traditional marketing research methods and assess their potential for addressing the technological oversights problem. We conclude with implications for academics and practitioners.

    Probabilistic Disease Classification of Expression-Dependent Proteomic Data from Mass Spectrometry of Human Serum

    We have developed an algorithm called Q5 for probabilistic classification of healthy vs. disease whole serum samples using mass spectrometry. The algorithm employs Principal Components Analysis (PCA) followed by Linear Discriminant Analysis (LDA) on whole-spectrum Surface-Enhanced Laser Desorption/Ionization Time of Flight (SELDI-TOF) Mass Spectrometry (MS) data, and is demonstrated on four real datasets from complete, complex SELDI spectra of human blood serum. Q5 is a closed-form, exact solution to the problem of classification of complete mass spectra of a complex protein mixture. Q5 employs a novel probabilistic classification algorithm built upon a dimension-reduced linear discriminant analysis. Our solution is computationally efficient; it is non-iterative and computes the optimal linear discriminant using closed-form equations. The optimal discriminant is computed and verified for datasets of complete, complex SELDI spectra of human blood serum. Replicate experiments with different training/testing splits of each dataset are employed to verify the robustness of the algorithm. The probabilistic classification method achieves excellent performance: we achieve sensitivity, specificity, and positive predictive values above 97% on three ovarian cancer datasets and one prostate cancer dataset. The Q5 method outperforms previous full-spectrum complex-sample spectral classification techniques, and can provide clues as to the molecular identities of differentially-expressed proteins and peptides.
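    The PCA-then-LDA pipeline the abstract describes can be written in closed form with plain numpy. The sketch below is a minimal illustration on synthetic stand-in data (random spectra with a small class mean shift), not the Q5 algorithm itself or its probabilistic output: PCA by SVD of the centered data, then the two-class Fisher discriminant computed non-iteratively in the reduced space.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for SELDI-TOF spectra: 40 "healthy" and 40 "disease"
    # samples over 500 m/z bins, with a small mean shift between classes.
    healthy = rng.normal(0.0, 1.0, (40, 500))
    disease = rng.normal(0.3, 1.0, (40, 500))
    X = np.vstack([healthy, disease])
    y = np.array([0] * 40 + [1] * 40)

    # 1) PCA via SVD of the mean-centered data; keep the top k components.
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    k = 5
    Z = (X - mean) @ Vt[:k].T               # scores in the reduced space

    # 2) Two-class LDA in the reduced space, in closed form:
    #    w = Sw^{-1} (mu1 - mu0), Sw = pooled within-class scatter.
    mu0, mu1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    Sw = np.cov(Z[y == 0], rowvar=False) + np.cov(Z[y == 1], rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2         # midpoint between projected means

    pred = (Z @ w > threshold).astype(int)
    accuracy = (pred == y).mean()
    print(f"training accuracy: {accuracy:.2f}")
    ```

    Both steps are non-iterative linear algebra, which is the sense in which such a discriminant is "closed-form"; a real evaluation would of course use held-out splits as the paper does.
    
    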

    The sales lead black hole: On sales reps' follow-up of marketing leads

    The sales lead black hole (the 70% of leads generated by marketing departments that sales representatives do not pursue) may result from competing demands on sales reps' time. Using the motivation-opportunity-ability framework, the authors consider factors that influence sales reps' pursuit (or lack thereof) of marketing and self-generated leads. The proportion of time that sales reps devote to marketing leads depends on organizational lead prequalification and managerial tracking processes (extrinsic motivation), as well as marketing lead volume (opportunity) and sales rep experience and performance (ability). Consistent with a person-situation framework, individual sales rep factors should also moderate the influence of organizational processes on lead follow-up. Data from 461 sales reps employed by four firms confirm that as sales reps' experience increases, their responses to managerial tracking of lead follow-up and marketing lead volume decrease, while their responses to the quality of the lead prequalification process increase. As sales reps' performance improves, their response to the volume of marketing leads increases, but their response to managerial tracking decreases. The interplay of individual sales reps' abilities and organizational marketing and sales processes explains differences in sales reps' follow-up of marketing leads. © 2013, American Marketing Association

    High-Throughput Inference of Protein-Protein Interaction Sites from Unassigned NMR Data by Analyzing Arrangements Induced By Quadratic Forms on 3-Manifolds

    We cast the problem of identifying protein-protein interfaces, using only unassigned NMR spectra, as a geometric clustering problem. Identifying protein-protein interfaces is critical to understanding inter- and intra-cellular communication, and NMR allows the study of protein interaction in solution. However, it is often the case that NMR studies of a protein complex are very time-consuming, mainly due to the bottleneck of assigning the chemical shifts, even if the apo structures of the constituent proteins are known. We study whether it is possible, in a high-throughput manner, to identify the interface region of a protein complex using only unassigned chemical shift and residual dipolar coupling (RDC) data. We introduce a geometric optimization problem in which we must cluster the cells of an arrangement on the boundary of a 3-manifold. The arrangement is induced by a spherical quadratic form, which in turn is parameterized by SO(3) x R^2. We show that this formalism derives directly from the physics of RDCs. We present an optimal algorithm for this problem that runs in O(n^3 log n) time for an n-residue protein. We then use this clustering algorithm as a subroutine in a practical algorithm for identifying the interface region of a protein complex from unassigned NMR data. We present the results of our algorithm on NMR data for 7 proteins from 5 protein complexes and show that our approach is useful for high-throughput applications in which we seek to rapidly identify the interface region of a protein complex.

    The Hyponatremic Hypertensive Syndrome in a Preterm Infant: A Case of Severe Hyponatremia with Neurological Sequels

    Objective. To report the irreversible severe neurological symptoms following the hyponatremic hypertensive syndrome (HHS) in an infant after umbilical arterial catheterization. Design. Case report with review of the literature. Setting. Neonatal intensive care unit at a tertiary care children's hospital. Patient. A three-week-old preterm infant. Conclusions. In evaluating a neonate with hyponatremia and hypertension, HHS should be considered, especially in cases of umbilical arterial catheterization. If diagnosis is delayed, there is a risk of severe irreversible neurological damage.

    Nonnegative principal component analysis for mass spectral serum profiles and biomarker discovery

    Background. As a novel cancer diagnostic paradigm, mass spectroscopic serum proteomic pattern diagnostics has been reported to be superior to conventional serologic cancer biomarkers. However, its clinical use is not yet fully validated. An important factor preventing this young technology from becoming a mainstream cancer diagnostic paradigm is that robustly identifying cancer molecular patterns from high-dimensional protein expression data remains a challenge in machine learning and oncology research. As a well-established dimension reduction technique, PCA is widely integrated into pattern recognition analysis to discover cancer molecular patterns. However, its global feature selection mechanism prevents it from capturing local features, which may make high-performance proteomic pattern discovery difficult, because only features interpreting global data behavior are used to train a learning machine. Methods. In this study, we develop a nonnegative principal component analysis algorithm and present a nonnegative principal component analysis based support vector machine algorithm with sparse coding to conduct high-performance proteomic pattern classification. Moreover, we propose a nonnegative principal component analysis based filter-wrapper biomarker capturing algorithm for mass spectral serum profiles. Results. We demonstrate the superiority of the proposed algorithm by comparison with six peer algorithms on four benchmark datasets. Moreover, we illustrate that nonnegative principal component analysis can be effectively used to capture meaningful biomarkers. Conclusion. Our analysis suggests that nonnegative principal component analysis effectively conducts local feature selection for mass spectral profiles and contributes to improving sensitivities and specificities in the subsequent classification, as well as to meaningful biomarker discovery.
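    The key idea of a nonnegativity constraint producing local, sparse loadings can be illustrated with a simple heuristic that is not the paper's algorithm: projected power iteration, which alternates a covariance multiply with clipping to the nonnegative orthant and renormalization. The data below are a synthetic stand-in (random spectra with a "biomarker" signal confined to a few m/z bins); the constrained loading vector concentrates on exactly those bins.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for mass spectral profiles: 60 samples x 200 m/z bins,
    # with a group shift confined to bins 50-59 (the "biomarker" region).
    X = rng.normal(0.0, 1.0, (60, 200))
    X[:30, 50:60] += 3.0

    C = np.cov(X, rowvar=False)            # 200 x 200 sample covariance

    def nonnegative_pc(C, iters=200):
        """First nonnegative principal component by projected power iteration:
        multiply by the covariance, clip negatives to zero, renormalize.
        A simple heuristic, not the paper's exact algorithm."""
        w = np.full(C.shape[0], 1.0 / np.sqrt(C.shape[0]))
        for _ in range(iters):
            w = C @ w
            w = np.clip(w, 0.0, None)      # enforce w >= 0
            w /= np.linalg.norm(w)
        return w

    w = nonnegative_pc(C)
    top_bins = np.argsort(w)[-10:]         # bins with the largest loadings
    print(sorted(int(b) for b in top_bins))
    ```

    Because the loading vector is nonnegative and effectively sparse, its large entries read directly as candidate biomarker bins, which is the "local feature selection" behavior the abstract attributes to nonnegative PCA.
    
    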

    Good practices for a literature survey are not followed by authors while preparing scientific manuscripts

    The number of citations received by authors in scientific journals has become a major parameter for assessing individual researchers, and the journals themselves, through the impact factor. A fair assessment therefore requires that the criteria for selecting references in a given manuscript be unbiased with respect to the authors or the journals cited. In this paper, we advocate that authors should follow two mandatory principles to select papers (later reflected in the list of references) while studying the literature for a given research project: i) consider similarity of content with the topics investigated, lest closely related work be duplicated or overlooked; ii) perform a systematic search over the network of citations, including seminal or closely related papers. We use formalisms of complex networks on two datasets of papers from the arXiv repository to show that neither of these criteria is fulfilled in practice.
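    Principle ii) amounts to traversing the citation graph rather than stopping at a manuscript's direct reference list. The sketch below shows one way to do that with a breadth-first walk over a toy citation graph; the graph and paper names are invented for illustration, and real use would query a bibliographic database instead.

    ```python
    from collections import deque

    # Toy citation graph: paper -> papers it cites. Names are illustrative only.
    cites = {
        "new_manuscript": ["A", "B"],
        "A": ["C", "seminal"],
        "B": ["C"],
        "C": ["seminal"],
        "seminal": [],
    }

    def citation_closure(start, graph, max_depth=2):
        """Breadth-first walk over the citation network, up to max_depth hops,
        returning every reachable paper -- a systematic search rather than a
        stop at the direct reference list."""
        seen = {start}
        queue = deque([(start, 0)])
        while queue:
            paper, depth = queue.popleft()
            if depth == max_depth:
                continue                    # do not expand beyond the horizon
            for ref in graph.get(paper, []):
                if ref not in seen:
                    seen.add(ref)
                    queue.append((ref, depth + 1))
        return seen - {start}

    reachable = citation_closure("new_manuscript", cites)
    print(sorted(reachable))  # the direct references plus the papers they cite
    ```

    Here the two-hop walk surfaces the hypothetical "seminal" paper even though the manuscript does not cite it directly, which is exactly the kind of omission the authors argue a systematic search would prevent.
    
    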